Transparent, Evaluable, and Accessible Data Agents: A Proof-of-Concept Framework
This article presents a modular, component-based architecture for developing and evaluating AI agents that bridge the gap between natural language interfaces and complex enterprise data warehouses. The system addresses core challenges in data accessibility by letting non-technical users interact with complex data warehouses through a conversational interface, translating ambiguous user intent into precise, executable database queries to overcome semantic gaps. A cornerstone of the design is its commitment to transparent decision-making: a multi-layered reasoning framework explains the "why" behind every decision, allowing full interpretability by tracing conclusions through specific, activated business rules and data points. The architecture integrates a robust quality-assurance mechanism via an automated evaluation framework that serves two functions: it enables performance benchmarking by objectively measuring agent performance against golden standards, and it ensures system reliability by automatically detecting performance regressions during updates. The agent's analytical depth is enhanced by a statistical context module that quantifies deviations from normative behavior, ensuring all conclusions are supported by quantitative evidence, including concrete data, percentages, and statistical comparisons. We demonstrate the efficacy of this integrated development-and-evaluation framework through a case study on an insurance claims processing system. The agent, built on a modular architecture, leverages the BigQuery ecosystem to perform secure data retrieval, apply domain-specific business rules, and generate human-auditable justifications. The results confirm that this approach yields a robust, evaluable, and trustworthy system for deploying LLM-powered agents in data-sensitive, high-stakes domains.
- North America > Canada > Ontario > Toronto (0.40)
- North America > United States > New York (0.04)
- North America > United States > Texas (0.04)
- (4 more...)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Health Care Technology > Telehealth (0.95)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Rule-Based Reasoning (0.89)
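The traceability the abstract describes, conclusions traced back through specific, activated business rules and data points, can be sketched in a few lines. The rule names, thresholds, and claim fields below are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical business rules for an insurance-claims agent; the rule
# IDs, descriptions, and thresholds are invented for illustration.
@dataclass
class Rule:
    rule_id: str
    description: str
    predicate: Callable[[dict], bool]

@dataclass
class Decision:
    flagged: bool
    activated_rules: list = field(default_factory=list)  # audit trail

RULES = [
    Rule("R1", "claim amount exceeds 3x the policyholder's historical mean",
         lambda c: c["amount"] > 3 * c["historical_mean"]),
    Rule("R2", "claim filed within 30 days of policy start",
         lambda c: c["days_since_policy_start"] < 30),
]

def evaluate(claim: dict) -> Decision:
    """Apply every rule, recording which ones fired so the conclusion
    can be traced to specific rules and the data that triggered them."""
    fired = [r for r in RULES if r.predicate(claim)]
    return Decision(flagged=bool(fired),
                    activated_rules=[(r.rule_id, r.description) for r in fired])

claim = {"amount": 12000, "historical_mean": 2500, "days_since_policy_start": 400}
decision = evaluate(claim)
print(decision.flagged)  # True: R1 fired (12000 > 3 * 2500)
```

Because the audit trail carries both the rule ID and its human-readable description, the "why" behind a flagged claim can be rendered directly as a justification.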
Datrics Text2SQL: A Framework for Natural Language to SQL Query Generation
Gladkykh, Tetiana, Kirykov, Kyrylo
Text-to-SQL systems enable users to query databases using natural language, democratizing access to data analytics. However, they face challenges in understanding ambiguous phrasing, domain-specific vocabulary, and complex schema relationships. This paper introduces Datrics Text2SQL, a Retrieval-Augmented Generation (RAG)-based framework designed to generate accurate SQL queries by leveraging structured documentation, example-based learning, and domain-specific rules. The system builds a rich Knowledge Base from database documentation and question-query examples, which are stored as vector embeddings and retrieved through semantic similarity. It then uses this context to generate syntactically correct and semantically aligned SQL code. The paper details the architecture, training methodology, and retrieval logic, highlighting how the system bridges the gap between user intent and database structure without requiring SQL expertise.
- Research Report (1.00)
- Workflow (0.68)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.52)
- Information Technology > Artificial Intelligence > Natural Language > Text Processing (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Supervised Learning (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Expert Systems (0.35)
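The retrieval step Datrics Text2SQL relies on, finding the stored question-query examples most semantically similar to the user's question, can be sketched with a toy stand-in. Real systems embed text with a trained model; here term-count vectors and cosine similarity substitute so the sketch stays self-contained, and the knowledge-base entries are invented:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Knowledge base of question -> SQL examples (illustrative schema).
KB = [
    ("total sales per region",
     "SELECT region, SUM(amount) FROM sales GROUP BY region"),
    ("customers who churned last month",
     "SELECT * FROM customers WHERE churned_at >= DATE('now', '-1 month')"),
]

def retrieve(question: str):
    """Return the stored example most similar to the user question; its
    SQL would be handed to the generator as few-shot context."""
    q = embed(question)
    return max(KB, key=lambda ex: cosine(q, embed(ex[0])))

best = retrieve("show sales totals by region")
print(best[1])
```

Swapping `embed` for a real embedding model and `KB` for a vector store is what turns this toy into the RAG pipeline the paper describes.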
Business as Rulesual: A Benchmark and Framework for Business Rule Flow Modeling with LLMs
Yang, Chen, Xu, Ruping, Li, Ruizhe, Cao, Bin, Fan, Jing
Process mining aims to discover, monitor and optimize the actual behaviors of real processes. While prior work has mainly focused on extracting procedural action flows from instructional texts, rule flows embedded in business documents remain underexplored. To this end, we introduce a novel annotated Chinese dataset, BPRF, which contains 50 business process documents with 326 explicitly labeled business rules across multiple domains. Each rule is represented as a
- South America > Uruguay > Montevideo > Montevideo (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- (4 more...)
- Law (0.69)
- Education (0.46)
- Information Technology > Security & Privacy (0.46)
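The abstract is cut off before stating how BPRF represents each rule. Purely as an assumption, one plausible structure is a condition-action pair plus directed flow edges between rules; the record fields and example rules below are invented, not BPRF's actual schema:

```python
from dataclasses import dataclass

# Illustrative assumption only: a labeled business rule as a
# condition-action pair, with flow edges linking rules into a graph.
@dataclass(frozen=True)
class BusinessRule:
    rule_id: str
    condition: str
    action: str

@dataclass(frozen=True)
class FlowEdge:
    src: str  # rule_id of the preceding rule
    dst: str  # rule_id of the following rule

rules = [
    BusinessRule("BR1", "invoice total > 10,000", "require manager approval"),
    BusinessRule("BR2", "manager approval granted", "release payment"),
]
flow = [FlowEdge("BR1", "BR2")]

# A rule flow is then just a directed graph over labeled rules.
successors = {e.src: e.dst for e in flow}
print(successors["BR1"])  # BR2
```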
LLM assisted web application functional requirements generation: A case study of four popular LLMs over a Mess Management System
Gupta, Rashmi, Gupta, Aditya K, Jain, Aarav, Pandey, Avinash C, Gupta, Atul
Like other disciplines, software engineering has been significantly impacted by Large Language Models (LLMs), which help developers generate required artifacts across the phases of software development. This paper presents a case study comparing the performance of popular LLMs (GPT, Claude, Gemini, and DeepSeek) in generating functional specifications, including use cases, business rules, and collaborative workflows, for a web application, the Mess Management System. The study evaluated the quality of LLM-generated use cases, business rules, and collaborative workflows in terms of syntactic and semantic correctness, consistency, non-ambiguity, and completeness against reference specifications, using a zero-shot prompted problem statement. Our results suggest that all four LLMs can specify syntactically and semantically correct, mostly non-ambiguous artifacts, but they can be inconsistent at times and differ significantly in the completeness of the generated specification. Claude and Gemini generated all the reference use cases, with Claude achieving the most complete, though somewhat redundant, use-case specifications. Similar results were obtained for workflows. However, all four LLMs struggled to generate relevant business rules, with DeepSeek generating the most reference rules but with less completeness. Overall, Claude generated more complete specification artifacts, while Gemini was more precise in the specifications it generated. Formally specifying software remains one of the challenging tasks in software engineering: specifications must be well-defined, unambiguous, complete, consistent, and aligned with stakeholder needs.
- Europe > Switzerland (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
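One dimension the study measures, completeness of generated artifacts against a reference specification, reduces to a recall-style score. The matching below is exact string equality for simplicity (the study judged semantic matches), and the Mess Management System use-case names are invented:

```python
# Completeness of a generated artifact set against a reference set,
# sketched as simple recall. Use-case names are illustrative.
def completeness(generated: set, reference: set) -> float:
    """Fraction of reference artifacts covered by the generation."""
    return len(generated & reference) / len(reference)

reference_use_cases = {"register student", "record daily meals", "generate monthly bill"}
llm_output = {"register student", "generate monthly bill", "manage menu"}

score = round(completeness(llm_output, reference_use_cases), 2)
print(score)  # 0.67: two of three reference use cases were produced
```

A redundancy measure like Claude's "complete but somewhat redundant" finding would be the mirror image: the fraction of generated artifacts that match nothing in the reference.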
Natural Language Query Engine for Relational Databases using Generative AI
The growing reliance on data-driven decision-making highlights the need for more intuitive ways to access and analyze information stored in relational databases. However, the requirement of SQL knowledge has long been a significant barrier for non-technical users. This article introduces an innovative solution that leverages Generative AI to bridge this gap, enabling users to query databases using natural language. Our approach automatically translates natural language queries into SQL, ensuring both syntactic and semantic correctness, while also generating clear, natural language responses from the retrieved data. By streamlining the interaction between users and databases, this method empowers individuals without technical expertise to engage with data directly and efficiently, democratizing access to valuable insights and enhancing productivity.
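The translation step itself requires an LLM, but the article's guarantee of "syntactic and semantic correctness" implies a validation gate on the generated SQL before execution. A self-contained sketch of such a gate, using SQLite's `EXPLAIN` to compile a candidate statement against an illustrative schema without running it:

```python
import sqlite3

# Toy target schema; in the article's setting this would be the real
# relational database the natural-language queries are aimed at.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")

def is_valid_sql(sql: str) -> bool:
    """EXPLAIN compiles the statement without executing it, so this
    catches syntax errors and references to unknown tables/columns."""
    try:
        conn.execute("EXPLAIN " + sql)
        return True
    except sqlite3.Error:
        return False

print(is_valid_sql("SELECT customer, SUM(total) FROM orders GROUP BY customer"))  # True
print(is_valid_sql("SELEC customer FROM orders"))                                 # False
```

In a full pipeline, a failed check would be fed back to the generator for a retry rather than surfaced to the user.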
Council Post: Why IoT Is The Next Frontier For AI And Prescriptive Analytics
Guy, a recognized industry thought leader, is the president of SmartSense, a provider of IoT solutions for the enterprise. In recent years, artificial intelligence (AI) and prescriptive analytics have been woven into the fabric of business operations across multiple verticals, ranging from retail and grocery to healthcare, pharmaceuticals and more. Today, there is high demand for continuous streams of data from the physical world. IoT's accurate sensing capabilities, coupled with its capacity to generate vast data resources, make it a natural partner to the disciplines of pragmatic AI and prescriptive analytics. Businesses that integrate these related disciplines have the potential to transform their operations and increase profit margins.
A quantitative and qualitative approach to data cleaning
When we started learning COBOL in high school, one of the first things the teacher introduced was the concept of GIGO. GIGO stands for "garbage in, garbage out": if we feed a program cluttered, inconsistent data, it will either error out or produce inaccurate results. This fundamental principle has not changed in machine learning programming. If anything, it has become more relevant over time, given the massive amount of data required to train a model for real-life artificial intelligence use cases.
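The quantitative half of the article's approach amounts to measuring how much "garbage" a dataset contains before it reaches a model. A minimal sketch over a toy dataset; the field names, sample rows, and validity range are illustrative:

```python
# Quantitative data-quality checks in the spirit of GIGO: measure
# missingness and out-of-range values before training on the data.
rows = [
    {"age": 34,   "income": 52000},
    {"age": None, "income": 48000},  # missing value
    {"age": 290,  "income": 61000},  # out-of-range value
    {"age": 41,   "income": None},
]

def missing_rate(rows: list, field: str) -> float:
    """Fraction of records with no value for the given field."""
    return sum(1 for r in rows if r[field] is None) / len(rows)

def out_of_range(rows: list, field: str, lo: float, hi: float) -> list:
    """Records whose value falls outside the plausible [lo, hi] range."""
    return [r for r in rows if r[field] is not None and not lo <= r[field] <= hi]

print(missing_rate(rows, "age"))               # 0.25
print(len(out_of_range(rows, "age", 0, 120)))  # 1 (the age-290 record)
```

The qualitative half, deciding whether an age of 290 is a typo for 29 or should be dropped, is exactly the judgment call these counts surface for a human.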
AI for business users: a glossary
When you work with IT staff and data scientists, they're going to use acronyms that you might not be familiar with. Business users should learn these common AI terms and acronyms to communicate well with data teams. Artificial intelligence is a form of intelligence demonstrated by a computer. A computer can be programmed with logic and business rules that enable it to "reason" through situations and come up with a conclusion.
Protecting payments in an era of deepfakes and advanced AI
In the midst of unprecedented volumes of e-commerce since 2020, the number of digital payments made every day around the planet has exploded – hitting about $6.6 trillion in value last year, a 40 percent jump in two years. With all that money flowing through the world's payments rails, there's even more reason for cybercriminals to innovate ways to nab it. To help ensure payments security today requires advanced game theory skills to outthink and outmaneuver highly sophisticated criminal networks that are on track to steal up to $10.5 trillion in "booty" via cybersecurity damages, according to a recent Argus Research report. Payment processors around the globe are constantly playing against fraudsters and improving upon "their game" to protect customers' money. The target invariably moves, and scammers become ever more sophisticated.
IoT Streaming Data is Going to the Dogs
The Internet of Things (IoT), with its ubiquitous sensors and streams of massive data for massive insights, has an estimated market valuation of nearly $2 trillion. Apparently, the "sensoring" of the world is a seriously big deal, generating insights into people, processes, and products on a scale that is almost incomprehensible. Certainly, $2 trillion is almost an incomprehensible figure. The corresponding forecasts for data-driven insights that lead to such a valuation are expected to be on a similarly large scale to justify those astronomical projections. But insights are not hardcoded within Raspberry Pi or Arduino kits, though IFTTT (If-This-Then-That) kits might be a satisfactory solution (more about that later).